All Questions

1 vote · 0 answers · 159 views

Is object-based representation of the observation space feasible?

I just started working on a DRL project from scratch. The state of each episode can be expressed as a state set $S=(S^A, S^B, S^C, S^D)$. Each subset is a feature set of a constituent component of the ...
asked by Shahin
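One common way to handle an object-based state like $S=(S^A, S^B, S^C, S^D)$ is to keep each component's feature set as its own array in a dict and flatten them into one vector for the policy network. A minimal sketch in plain NumPy, where the component names and feature sizes are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch: the composite state S = (S^A, S^B, S^C, S^D) kept as a
# dict of per-component feature arrays, then flattened into a single vector
# that a standard MLP policy can consume. Sizes are assumptions.
rng = np.random.default_rng(0)
state = {
    "A": rng.standard_normal(4).astype(np.float32),
    "B": rng.standard_normal(3).astype(np.float32),
    "C": rng.standard_normal(2).astype(np.float32),
    "D": rng.standard_normal(5).astype(np.float32),
}

def flatten_state(s):
    """Concatenate the component feature sets in a fixed key order."""
    return np.concatenate([s[k] for k in sorted(s)])

vec = flatten_state(state)
print(vec.shape)  # (14,)
```

Frameworks such as Gymnasium also offer a `Dict` observation space for exactly this pattern, which keeps the per-object structure intact until you choose how to encode it.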
0 votes · 1 answer · 76 views

How to let the agent choose how to populate a state space matrix in RL (using python)

I have an agent (drone) that has to allocate subchannels for different types of User Equipment. I have represented the subchannel allocation with a 2-dimensional binary matrix that is initialized to ...
asked by Ness (206 rep)
0 votes · 2 answers · 83 views

Should I build an environment from scratch myself, or is that not always needed?

I am inspired by the paper Neural Architecture Search with Reinforcement Learning to use reinforcement learning for optimizing a child network (learner). My meta-learner (controller or parent network) ...
asked by samsambakster
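Whether a prebuilt environment suffices depends on the task; when it does not, writing one from scratch mostly means implementing the standard `reset()`/`step()` loop. A minimal hedged sketch of such an environment, where the toy task (rewarding an action close to a hidden optimum) stands in for the actual NAS controller setup, which it does not reproduce:

```python
# Hypothetical sketch: a minimal from-scratch environment with the usual
# reset()/step() interface. The agent proposes a scalar (e.g. a hyperparameter)
# and is rewarded for being close to a hidden optimum. Purely illustrative.
class TinyEnv:
    def __init__(self, optimum=0.3, horizon=10):
        self.optimum = optimum
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0                              # trivial observation

    def step(self, action):
        self.t += 1
        reward = -abs(action - self.optimum)    # closer action -> higher reward
        done = self.t >= self.horizon
        return 0.0, reward, done, {}

env = TinyEnv()
env.reset()
_, r, done, _ = env.step(0.3)
```

If the interface follows an established API such as Gymnasium's, off-the-shelf agent implementations can train against the custom environment without modification.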
1 vote · 1 answer · 257 views

Once the environments are vectorized, how should I gather immediate experiences for the agent?

My main purpose right now is to train an agent using the A2C algorithm to solve the Atari Breakout game. So far I have succeeded in creating that code with a single agent and environment. To break the ...
asked by jgauth
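With vectorized environments, experiences are typically gathered as batched arrays of shape `(n_steps, n_envs, ...)`, which is the layout A2C-style updates expect. A minimal sketch with toy stand-in dynamics and a random stand-in policy, where all sizes are assumptions:

```python
import numpy as np

# Hypothetical sketch: collecting a fixed-length rollout from n_envs parallel
# copies of a toy environment into buffers of shape (n_steps, n_envs, ...),
# the batched layout A2C-style updates consume. Dynamics are stand-ins.
n_envs, n_steps, obs_dim = 4, 5, 3
rng = np.random.default_rng(0)

obs = np.zeros((n_envs, obs_dim), dtype=np.float32)        # batched reset
obs_buf = np.zeros((n_steps, n_envs, obs_dim), dtype=np.float32)
act_buf = np.zeros((n_steps, n_envs), dtype=np.int64)
rew_buf = np.zeros((n_steps, n_envs), dtype=np.float32)

for t in range(n_steps):
    actions = rng.integers(0, 2, size=n_envs)              # stand-in policy
    obs_buf[t] = obs
    act_buf[t] = actions
    rew_buf[t] = actions.astype(np.float32)                # toy reward
    obs = obs + 0.1                                        # toy batched step

print(obs_buf.shape, rew_buf.shape)  # (5, 4, 3) (5, 4)
```

In a real setup the batched `reset`/`step` calls would come from something like Gymnasium's vector environments, but the buffer shapes and the per-step loop are the same.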